16 research outputs found

    Wellington: a novel method for the accurate identification of digital genomic footprints from DNase-seq data

    The expression of eukaryotic genes is regulated by cis-regulatory elements such as promoters and enhancers, which bind sequence-specific DNA-binding proteins. One of the great challenges in the gene regulation field is to characterise these elements. This involves the identification of transcription factor (TF) binding sites within regulatory elements that are occupied in a defined regulatory context. Digestion with DNase and the subsequent analysis of regions protected from cleavage (DNase footprinting) has for many years been used to identify specific binding sites occupied by TFs at individual cis-elements with high resolution. This methodology has recently been adapted for high-throughput sequencing (DNase-seq). In this study, we describe an imbalance in the DNA strand-specific alignment information of DNase-seq data surrounding protein–DNA interactions that allows accurate prediction of occupied TF binding sites. Our study introduces a novel algorithm, Wellington, which considers the imbalance in this strand-specific information to efficiently identify DNA footprints. This algorithm significantly enhances specificity by reducing the proportion of false positives and requires significantly fewer predictions than previously reported methods to recapitulate an equal amount of ChIP-seq data. We also provide an open-source software package, pyDNase, which implements the Wellington algorithm to interface with DNase-seq data and expedite analyses.
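    The strand-specific imbalance that Wellington exploits can be conveyed with a toy score. The following is a minimal sketch, not the pyDNase implementation: all cut counts are hypothetical, and the score simply asks, per strand, whether 5' DNase cuts in the flanking region significantly exceed cuts inside the candidate footprint under a binomial model.

    ```python
    from math import comb

    def binom_sf(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p) -- exact survival function."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def footprint_score(fwd_up, fwd_fp, rev_fp, rev_down, p=0.5):
        """Toy Wellington-style footprint score (illustration only).
        fwd_up:   forward-strand 5' cuts in the upstream flank
        fwd_fp:   forward-strand cuts inside the candidate footprint
        rev_fp:   reverse-strand cuts inside the candidate footprint
        rev_down: reverse-strand cuts in the downstream flank
        An occupied site shows strand-specific depletion: flanking cuts
        dominate footprint cuts on each strand, so both binomial survival
        probabilities become small."""
        p_fwd = binom_sf(fwd_up, fwd_up + fwd_fp, p)      # forward strand vs. upstream flank
        p_rev = binom_sf(rev_down, rev_down + rev_fp, p)  # reverse strand vs. downstream flank
        return p_fwd * p_rev  # smaller = stronger footprint evidence

    # Hypothetical counts: a protected (occupied) site vs. an unbound region
    occupied = footprint_score(fwd_up=40, fwd_fp=5, rev_fp=4, rev_down=38)
    unbound = footprint_score(fwd_up=20, fwd_fp=22, rev_fp=21, rev_down=19)
    ```

    In pyDNase the analogous per-position statistics are computed genome-wide from BAM alignments; the toy above only conveys why strand-resolved cut counts sharpen footprint detection.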

    Failing to progress or progressing to fail? Age-for-grade heterogeneity and grade repetition in primary schools in Karonga district, northern Malawi.

    Timely progression through school is an important measure for school performance, completion and the onset of other life transitions for adolescents. This study examines the risk factors for grade repetition and establishes the extent to which age-for-grade heterogeneity contributes to subsequent grade repetition at early and later stages of school. Using data from a demographic surveillance site in Karonga district, northern Malawi, a cohort of 8174 respondents (ages 5-24 years) in primary school was followed in 2010 and subsequent grade repetition observed in 2011. Grade repetition was more common among those at early (grades 1-3) and later (grades 7-8) stages of school, with little variation by sex. Being under-age or over-age in school has different implications for schooling outcomes, depending on the stage of schooling. After adjusting for other risk factors, boys and girls who were under-age at early stages were at least twice as likely to repeat a grade as those at the official age-for-grade (girls: adjusted OR 2.06, p < 0.01; boys: adjusted OR 2.37, p < 0.01), while those over-age at early stages were about 30% less likely to repeat (girls: adjusted OR 0.65, p < 0.01; boys: adjusted OR 0.72, p < 0.01). Being under- or over-age at later grades (4-8) was not associated with subsequent repetition, but being over-age was associated with dropout. Other risk factors associated with repetition included both family-level factors (living away from their mother, having young children in the household, lower paternal education) and school-level factors (higher student-teacher ratio, proportion of female teachers and schools without access to water). Reducing the direct and indirect costs of schooling for households, and improving school quality and resources at early stages of school, may enable timely progression at early stages for greater retention at later stages.
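    The "twice as likely" figures are adjusted odds ratios. As a hedged illustration with purely invented counts (the study's estimates come from regression models that additionally control for family- and school-level factors), an unadjusted odds ratio from a 2×2 table is computed as:

    ```python
    def odds_ratio(a, b, c, d):
        """Unadjusted odds ratio from a 2x2 table:
                       repeated   progressed
          under-age       a           b
          on-age          c           d
        All counts used below are hypothetical; the study reports *adjusted*
        ORs, which also account for other risk factors."""
        return (a / b) / (c / d)

    # Hypothetical early-grade counts giving an OR of 2.0, i.e. the odds of
    # repeating for under-age pupils are twice those of pupils at the
    # official age-for-grade.
    or_under = odds_ratio(40, 100, 20, 100)
    ```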

    Comparison of Propensity Score Methods and Covariate Adjustment: Evaluation in 4 Cardiovascular Studies.

    Propensity scores (PS) are an increasingly popular method to adjust for confounding in observational studies. Propensity score methods have theoretical advantages over conventional covariate adjustment, but their relative performance in real-world scenarios is poorly characterized. We used datasets from 4 large-scale cardiovascular observational studies (PROMETHEUS, ADAPT-DES [the Assessment of Dual AntiPlatelet Therapy with Drug-Eluting Stents], THIN [The Health Improvement Network], and CHARM [Candesartan in Heart Failure-Assessment of Reduction in Mortality and Morbidity]) to compare the performance of conventional covariate adjustment with 4 common PS methods: matching, stratification, inverse probability weighting, and use of PS as a covariate. We found that stratification performed poorly with few outcome events, and inverse probability weighting gave imprecise estimates of treatment effect and undue influence to a small number of observations when substantial confounding was present. Covariate adjustment and matching performed well in all of our examples, although matching tended to give less precise estimates in some cases. PS methods are not necessarily superior to conventional covariate adjustment, and care should be taken to select the most suitable method.
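    The "undue influence" problem with inverse probability weighting is easy to see in a sketch. This toy uses invented treatment indicators and propensity scores (not data from any of the four studies) to show how a single treated subject with a near-zero PS receives an enormous weight:

    ```python
    def ipw_weights(treated, ps):
        """Inverse probability of treatment weights: treated units are
        weighted by 1/PS, controls by 1/(1 - PS). With strong confounding,
        propensity scores near 0 (or 1) produce extreme weights, so a
        handful of observations can dominate the weighted estimate."""
        return [1.0 / p if t else 1.0 / (1.0 - p) for t, p in zip(treated, ps)]

    treated = [1, 1, 0, 0, 1]
    ps = [0.50, 0.40, 0.50, 0.60, 0.02]  # hypothetical; last treated unit has tiny PS
    w = ipw_weights(treated, ps)
    # The PS = 0.02 subject carries a weight of ~50 while the others carry
    # ~2-2.5, illustrating why IPW was imprecise under substantial confounding.
    ```

    In practice this motivates weight truncation or stabilized weights, though the abstract itself only reports the instability.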

    Selective Phenotyping, Entropy Reduction, and the Mastermind game

    Background: With the advance of genome sequencing technologies, phenotyping, rather than genotyping, is becoming the most expensive task when mapping genetic traits. The need for efficient selective phenotyping strategies, i.e. methods to select a subset of genotyped individuals for phenotyping, therefore increases. Current methods have focused on improving either the detection of causative genetic variants or their precise genomic localization, but not both. Results: Here we recognize selective phenotyping as a Bayesian model discrimination problem and introduce SPARE (Selective Phenotyping Approach by Reduction of Entropy). Unlike previous methods, SPARE can integrate the information of previously phenotyped individuals, thereby enabling an efficient incremental strategy. The effective performance of SPARE is demonstrated on simulated data as well as on an experimental yeast dataset. Conclusions: Using entropy reduction as an objective criterion gives a natural way to tackle both issues of detection and localization simultaneously and to integrate intermediate phenotypic data. We foresee entropy-based strategies as a fruitful research direction for selective phenotyping.
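    The entropy-reduction criterion can be conveyed with a two-model toy. This is not the SPARE code, and the priors and likelihoods are invented: the point is only that phenotyping is most informative for the individual whose phenotype the competing models predict most differently, i.e. the one minimizing expected posterior entropy.

    ```python
    from math import log2

    def entropy(probs):
        """Shannon entropy (bits) of a discrete distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    def expected_posterior_entropy(prior, likelihoods):
        """Expected entropy of the model posterior after observing a binary
        phenotype for one candidate individual.
        prior:        prior P(model m) over competing genetic models
        likelihoods:  likelihoods[m] = P(phenotype = 1 | model m, individual)
        (Toy two-model illustration of an entropy-reduction criterion.)"""
        out = 0.0
        for y in (0, 1):
            # Joint P(m, y), marginal P(y), and posterior P(m | y) by Bayes' rule
            joint = [pm * (lk if y else 1 - lk) for pm, lk in zip(prior, likelihoods)]
            py = sum(joint)
            if py > 0:
                out += py * entropy([j / py for j in joint])
        return out

    prior = [0.5, 0.5]
    # Individual A: the two models disagree sharply -> phenotyping A is informative
    h_a = expected_posterior_entropy(prior, [0.9, 0.1])
    # Individual B: the models make identical predictions -> phenotyping B tells us little
    h_b = expected_posterior_entropy(prior, [0.5, 0.5])
    ```

    Selecting the candidate with the lowest expected posterior entropy (here A) is the greedy step; SPARE's incremental strategy comes from recomputing the posterior as each new phenotype arrives.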

    Phase II, open-label, randomized, multicenter trial (HERBY) of Bevacizumab in pediatric patients with newly diagnosed high-grade glioma

    Purpose Bevacizumab (BEV) is approved in more than 60 countries for use in adults with recurrent glioblastoma. We evaluated the addition of BEV to radiotherapy plus temozolomide (RT+TMZ) in pediatric patients with newly diagnosed high-grade glioma (HGG). Methods The randomized, parallel group, multicenter, open-label HERBY trial (ClinicalTrials.gov identifier: NCT01390948) enrolled patients age ≥ 3 years to ≤ 18 years with localized, centrally neuropathology-confirmed, nonbrainstem HGG. Eligible patients were randomly assigned to receive RT + TMZ (RT: 1.8 Gy, 5 days per week, and TMZ: 75 mg/m² per day for 6 weeks; 4-week treatment break; then up to 12 × 28-day cycles of TMZ [cycle 1: 150 mg/m² per day, days 1 to 5; cycles 2 to 12: 200 mg/m² per day, days 1 to 5]) with or without BEV (10 mg/kg every 2 weeks). The primary end point was event-free survival (EFS) as assessed by a central radiology review committee that was blinded to treatment. We report findings of EFS at 12 months after the enrollment of the last patient. Results One hundred twenty-one patients were enrolled (RT+TMZ [n = 59]; BEV plus RT+TMZ [n = 62]). Central radiology review committee–assessed median EFS did not differ significantly between treatment groups (RT+TMZ, 11.8 months; 95% CI, 7.9 to 16.4 months; BEV plus RT+TMZ, 8.2 months; 95% CI, 7.8 to 12.7 months; hazard ratio, 1.44; P = .13 [stratified log-rank test]). In the overall survival analysis, the addition of BEV did not reduce the risk of death (hazard ratio, 1.23; 95% CI, 0.72 to 2.09). More patients in the BEV plus RT+TMZ group versus the RT+TMZ group experienced one or more serious adverse events (n = 35 [58%] v n = 27 [48%]), and more patients who received BEV discontinued study treatment as a result of adverse events (n = 13 [22%] v n = 3 [5%]). Conclusion Adding BEV to RT+TMZ did not improve EFS in pediatric patients with newly diagnosed HGG. Our findings were not comparable to those of previous adult trials, which highlights the importance of performing pediatric-specific studies.

    Retrospective review of the drop in observer detection performance over time in lesion-enriched experimental studies.

    The vigilance decrement describes a decrease in sensitivity or increase in specificity with time on task. It has been observed in a variety of repetitive visual tasks, but little is known about these patterns in radiologists. We investigated whether there is systematic variation in performance over the course of a radiology reading session. We re-analyzed data from six previous lesion-enriched radiology studies. Studies featured 8-22 participants assessing 27-100 cases (including mammograms, chest CT, chest x-ray, and bone x-ray) in a reading session. Changes in performance and speed as the reading session progressed were analyzed using mixed effects models. Time taken per case decreased 9-23% as the reading session progressed (p < 0.005 for every study). There was a sensitivity decrease or specificity increase over the course of reading 100 chest x-rays (p = 0.005), 60 bone fracture x-rays (p = 0.03), and 100 chest CT scans (p < 0.0001). This effect was not found in the shorter mammography sessions with 27 or 50 cases. We found evidence supporting the hypothesis that behavior and performance may change over the course of reading an enriched test set. Further research is required to ascertain whether this effect is present in radiological practice.
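    The reported speed-up can be sketched on synthetic data. The studies fit mixed effects models across readers; the toy below instead fits a single-reader least-squares trend to invented, noise-free reading times, purely to illustrate a decline in the reported 9-23% range:

    ```python
    # Hypothetical per-case reading times (seconds) for one reader over a
    # 100-case session: starts near 60 s/case and speeds up linearly.
    order = list(range(1, 101))
    times = [60.0 - 0.12 * i for i in order]

    # Ordinary least-squares slope of time-per-case against case order
    n = len(order)
    mx = sum(order) / n
    my = sum(times) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(order, times))
             / sum((x - mx) ** 2 for x in order))

    # Relative drop in reading time from the first to the last case
    drop_pct = 100.0 * (times[0] - times[-1]) / times[0]
    ```

    A negative slope with a first-to-last drop inside the 9-23% band mirrors the per-case speed-up the re-analysis found in every study.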

    Burke-Fahn-Marsden dystonia severity, Gross Motor, Manual Ability, and Communication Function Classification scales in childhood hyperkinetic movement disorders including cerebral palsy: a 'Rosetta Stone' study.

    AIM: Hyperkinetic movement disorders (HMDs) can be assessed using impairment-based scales or functional classifications. The Burke-Fahn-Marsden Dystonia Rating Scale-movement (BFM-M) evaluates dystonia impairment, but may not reflect functional ability. The Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), and Communication Function Classification System (CFCS) are widely used in the literature on cerebral palsy to classify functional ability, but not in childhood movement disorders. We explore the concordance of these three functional scales in a large sample of paediatric HMDs and the impact of dystonia severity on these scales. METHOD: Children with HMDs (n=161; median age 10y 3mo, range 2y 6mo-21y) were assessed using the BFM-M, GMFCS, MACS, and CFCS from 2007 to 2013. This cross-sectional study contrasts the information provided by these scales. RESULTS: All four scales were strongly associated (all Spearman's rank correlation coefficients rs >0.72, p<0.001), with worse dystonia severity implying worse function. Secondary dystonias showed more severe dystonia and worse function than primary dystonias (p<0.001). A greater proportion of life lived with dystonia was associated with more severe dystonia (rs =0.42, p<0.001). INTERPRETATION: The BFM-M is strongly linked with the GMFCS, MACS, and CFCS, irrespective of aetiology. Each scale offers interrelated but complementary information and is applicable to all aetiologies. Movement disorders including cerebral palsy can be effectively evaluated using these scales.